67 research outputs found
Centralized content delivery infrastructure exploiting resource pools: performance models and asymptotics
We consider a centralized content delivery infrastructure where a large number of storage-intensive files are replicated across several collocated servers. To achieve scalable delays in file downloads under stochastic loads, we allow multiple servers to work together as a pooled resource to meet individual download requests. In such systems, basic questions include: How and where should files be replicated? How significant are the gains of resource pooling over policies that use a single server per request? What are the tradeoffs among conflicting metrics such as delay, reliability and recovery costs, and power? How robust is performance to heterogeneity and the choice of fairness criterion? In this thesis we provide a simple performance model for large systems towards addressing these basic questions. For large systems where the overall system load is proportional to the number of servers, we establish scaling laws among delays, system load, number of file replicas, demand heterogeneity, power, and network capacity.
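The resource-pooling gains this abstract refers to can be illustrated with a textbook queueing comparison: routing each request to a single server (k independent M/M/1 queues) versus serving every request from the pool (one M/M/k queue) at the same total load. A minimal sketch using the standard Erlang C formula, not the thesis's model:

```python
from math import factorial

def erlang_c(k, a):
    """Erlang C formula: probability that an arrival must wait in an
    M/M/k queue with offered load a = lambda/mu (requires a < k)."""
    idle = sum(a**i / factorial(i) for i in range(k))
    wait = a**k / (factorial(k) * (1 - a / k))
    return wait / (idle + wait)

def mean_delay_pooled(k, lam, mu):
    """Mean sojourn time when k servers jointly serve one M/M/k queue."""
    return erlang_c(k, lam / mu) / (k * mu - lam) + 1 / mu

def mean_delay_split(k, lam, mu):
    """Mean sojourn time when each request uses a single server,
    i.e. k independent M/M/1 queues, each fed arrival rate lam/k."""
    rho = lam / (k * mu)
    return 1 / (mu * (1 - rho))

# 10 unit-rate servers at 80% load: pooling sharply reduces delay.
print(mean_delay_split(10, 8.0, 1.0))   # 5.0
print(mean_delay_pooled(10, 8.0, 1.0))
```

At 10 servers and 80% load the split system's mean delay is exactly 5.0 time units, while the pooled system's is below 1.5 (since the waiting probability is at most 1); the gap widens as load grows, which is the pooling effect the thesis quantifies at scale.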
Splitting Algorithms for Fast Relay Selection: Generalizations, Analysis, and a Unified View
Relay selection for cooperative communications promises significant
performance improvements, and is, therefore, attracting considerable attention.
While several criteria have been proposed for selecting one or more relays,
distributed mechanisms that perform the selection have received relatively less
attention. In this paper, we develop a novel, yet simple, asymptotic analysis
of a splitting-based multiple access selection algorithm to find the single
best relay. The analysis leads to simpler and alternate expressions for the
average number of slots required to find the best user. By introducing a new
`contention load' parameter, the analysis shows that the parameter settings
used in the existing literature can be improved upon. New and simple bounds are
also derived. Furthermore, we propose a new algorithm that addresses the
general problem of selecting the best relays, and analyze and
optimize it. Even for a large number of relays, the algorithm selects the best
two relays within 4.406 slots and the best three within 6.491 slots, on
average. We also propose a new and simple scheme for the practically relevant
case of discrete metrics. Altogether, our results develop a unifying
perspective about the general problem of distributed selection in cooperative
systems and several other multi-node systems.Comment: 20 pages, 7 figures, 1 table, Accepted for publication in IEEE
Transactions on Wireless Communication
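The flavor of splitting-based selection can be conveyed by a short simulation. The window-update rule below is a generic binary-splitting variant with an illustrative initial window of width 1/n, not the optimized 'contention load' settings derived in the paper:

```python
import random

def splitting_select(metrics, max_slots=200):
    """Generic binary-splitting selection of the node with the largest
    metric (assumed i.i.d. Uniform(0,1)). In each slot, nodes whose metric
    lies in [lo, hi) transmit; the sink feeds back idle/success/collision."""
    n = len(metrics)
    lo, hi = 1.0 - 1.0 / n, 1.0    # illustrative initial transmit window
    saved = None                    # lower half stashed after a collision
    for slot in range(1, max_slots + 1):
        active = [i for i, m in enumerate(metrics) if lo <= m < hi]
        if len(active) == 1:
            return active[0], slot            # unique transmitter: done
        if len(active) > 1:                   # collision: keep upper half
            mid = (lo + hi) / 2
            saved = (lo, mid)
            lo = mid
        elif saved is not None:               # idle: retry stashed lower half
            lo, hi = saved
            saved = None
        else:                                 # idle, nothing stashed: move down
            lo, hi = max(0.0, lo - (hi - lo)), lo
    return None, max_slots

random.seed(0)
metrics = [random.random() for _ in range(16)]
winner, slots = splitting_select(metrics)     # winner holds the largest metric
```

Because the scan proceeds strictly top-down and collisions resolve the upper half first, the first uniquely transmitting node is always the one with the largest metric; the paper's analysis concerns how few slots such schemes need on average.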
Adaptive Matching for Expert Systems with Uncertain Task Types
A matching in a two-sided market often incurs an externality: a matched
resource may become unavailable to the other side of the market, at least for a
while. This is especially an issue in online platforms involving human experts
as the expert resources are often scarce. The efficient utilization of experts
in these platforms is made challenging by the fact that the information
available about the parties involved is usually limited.
To address this challenge, we develop a model of a task-expert matching
system where a task is matched to an expert using not only the prior
information about the task but also the feedback obtained from the past
matches. In our model the tasks arrive online while the experts are fixed and
constrained by a finite service capacity. For this model, we characterize the
maximum task resolution throughput a platform can achieve. We show that the
natural greedy approach, in which each expert is assigned the task most suitable
to her skill, is suboptimal, as it does not internalize the above externality. We
develop a throughput optimal backpressure algorithm which does so by accounting
for the `congestion' among different task types. Finally, we validate our model
and confirm our theoretical findings with data-driven simulations via logs of
Math.StackExchange, a Stack Exchange forum dedicated to mathematics.
Comment: Part of this work was presented at the Allerton Conference 2017; 18 pages.
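The externality argument can be made concrete with a toy example. The experts, task types, and resolution probabilities below are hypothetical, and the rule shown is a generic MaxWeight/backpressure weighting rather than the paper's exact algorithm:

```python
# Hypothetical resolution probabilities: P[expert][task_type].
P = {"alice": {"algebra": 0.9, "topology": 0.3},
     "bob":   {"algebra": 0.4, "topology": 0.8}}

def greedy_assign(queues, expert):
    """Greedy: pick the waiting task type the expert is best at,
    ignoring queue lengths (and hence the externality)."""
    waiting = [t for t in queues if queues[t] > 0]
    return max(waiting, key=lambda t: P[expert][t])

def backpressure_assign(queues, expert):
    """Backpressure/MaxWeight: weight each task type by queue length
    times resolution probability, internalizing congestion."""
    return max(queues, key=lambda t: queues[t] * P[expert][t])

queues = {"algebra": 10, "topology": 2}
print(greedy_assign(queues, "bob"))        # topology (his strongest skill)
print(backpressure_assign(queues, "bob"))  # algebra (10*0.4 > 2*0.8)
```

Greedy sends bob to topology, his strongest skill, starving the long algebra queue; the backpressure weight queue_length × resolution_probability sends him to algebra instead, which is the congestion-aware behavior behind the throughput-optimality result.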
OPTIMIZATION AND CHARACTERIZATION OF DOXORUBICIN LOADED SOLID LIPID NANOSUSPENSION FOR NOSE TO BRAIN DELIVERY USING DESIGN EXPERT SOFTWARE
Objective: The goal of the current study was to investigate the possible use of solid lipid nanosuspensions (SLNs) as a drug delivery method to boost doxorubicin (DOX) brain-targeting performance after intranasal (i.n.) administration.
Methods: A 3³ factorial design was applied for optimization, using lipid concentration, surfactant concentration, and high-speed homogenizer (HSH) stirring time as independent variables; their effect was observed on particle size, polydispersity index (PDI), and entrapment efficiency.
Results: The optimized DOX-SLN formulation was prepared with Compritol® 888 ATO (4.6 % w/v), Tween 80 (1.9 % w/v), and an HSH stirring time of 10 min. Particle size, PDI, zeta potential, entrapment efficiency, and in vitro release for the optimized formulation (V-O) were found to be 167.47±6.09 nm, 0.23±0.02, 24.1 mV, 75.3±2.79 %, and 89.35±3.27 % in 24 h, respectively. No major changes in particle size, zeta potential, or entrapment efficiency were found in stability studies at 4±2 °C (refrigerator) and 25±2 °C/60±5 % RH for up to 3 mo.
Conclusion: The positive findings support the current optimized DOX-loaded SLN formulation for non-invasive nose-to-brain drug delivery, a promising therapeutic strategy.
Optimal Timer Based Selection Schemes
Timer-based mechanisms are often used to help a given (sink) node select the
best helper node among many available nodes. Specifically, a node transmits a
packet when its timer expires, and the timer value is a monotone non-increasing
function of its local suitability metric. The best node is selected
successfully if no other node's timer expires within a 'vulnerability' window
after its timer expiry, and so long as the sink can hear the available nodes.
In this paper, we show that the optimal metric-to-timer mapping that (i)
maximizes the probability of success or (ii) minimizes the average selection
time subject to a minimum constraint on the probability of success, maps the
metric into a set of discrete timer values. We specify, in closed-form, the
optimal scheme as a function of the maximum selection duration, the
vulnerability window, and the number of nodes. An asymptotic characterization
of the optimal scheme turns out to be elegant and insightful. For any
probability distribution function of the metric, the optimal scheme is
scalable, distributed, and performs much better than the popular inverse metric
timer mapping. It even compares favorably with splitting-based selection, when
the latter's feedback overhead is accounted for.
Comment: 21 pages, 6 figures, 1 table. Submitted to IEEE Transactions on
Communications; uses stackrel.sty.
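A Monte Carlo sketch illustrates why mapping metrics to a few discrete timer levels can outperform the popular inverse-metric mapping. The level spacing, constants, and metric distribution below are illustrative choices, not the paper's closed-form optimum:

```python
import random

VULN = 0.05    # vulnerability window (illustrative)
T_MAX = 1.0    # maximum selection duration (illustrative)

def success_discrete(metrics, num_levels):
    """Quantize each metric into num_levels timer slots spaced as wide as
    the vulnerability window; selection succeeds iff the earliest occupied
    slot holds exactly one node."""
    levels = [min(int((1.0 - m) * num_levels), num_levels - 1) for m in metrics]
    return levels.count(min(levels)) == 1

def success_inverse(metrics):
    """Inverse-metric mapping, timer = c/metric (capped at T_MAX); succeeds
    iff no other timer expires within VULN of the earliest expiry."""
    c = 0.1
    timers = sorted(min(T_MAX, c / max(m, 1e-9)) for m in metrics)
    return timers[1] - timers[0] > VULN

random.seed(1)
trials = [[random.random() for _ in range(10)] for _ in range(5000)]
num_levels = int(T_MAX / VULN)   # 20 slots of width VULN
p_disc = sum(success_discrete(t, num_levels) for t in trials) / len(trials)
p_inv = sum(success_inverse(t) for t in trials) / len(trials)
print(p_disc, p_inv)
```

With ten contending nodes, the inverse mapping crowds the best timers together and often violates the vulnerability window, while the discrete mapping keeps expiries separated by construction, echoing the abstract's claim that the optimal mapping is discrete.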
Go Green Initiative from Google: A Study of Evolution in Teaching and Learning Environment by Google Classroom
Google Classroom is a free web-based platform that integrates a Google Apps for Education account with all of your Google Apps services, including Google Docs, Gmail, and Google Calendar. As part of its Go Green initiative, Google Classroom saves time and paper, and makes it easy to create classes, distribute assignments, communicate, and stay organized in a very effective manner. In the present paper, many functions of Google Classroom are evaluated. Using Google Classroom, teachers can quickly see who has or hasn't completed the work, and provide direct, real-time feedback and grades right in Google Classroom. From the study we can say that this is the best Go Green initiative from Google via its Google Classroom platform.
Poly-symmetry in processor-sharing systems
We consider a system of processor-sharing queues with state-dependent service rates. These are allocated according to balanced fairness within a polymatroid capacity set. Balanced fairness is known to be both insensitive and Pareto-efficient in such systems, which ensures that the performance metrics, when computable, will provide robust insights into the real performance of the system considered. We first show that these performance metrics can be evaluated with a complexity that is polynomial in the system size if the system is partitioned into a finite number of parts, so that queues are exchangeable within each part and asymmetric across different parts. This in turn allows us to derive stochastic bounds for a larger class of systems which satisfy less restrictive symmetry assumptions. These results are applied to practical examples of tree data networks, such as backhaul networks of Internet service providers, and computer clusters.